A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization

Authors

  • Ernesto De Vito
  • Veronica Umanità
  • Silvia Villa
Abstract

In the framework of supervised learning, we prove that the iterative algorithm introduced in Umanità and Villa (2010) consistently estimates the relevant features of the regression function, under the a priori assumption that it admits a sparse representation on a fixed dictionary.
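For orientation, the sketch below shows a generic iterative soft-thresholding (proximal gradient) scheme for the elastic-net functional; it is an illustrative assumption, not the specific damped iteration analyzed in the paper, and all names and the step-size rule are placeholders. Setting lam2 = 0 recovers the Lasso, and lam1 = 0 gives Tikhonov (ridge) regularization.

    import numpy as np

    def soft_threshold(v, t):
        # Componentwise soft-thresholding: the proximity operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def elastic_net_ista(X, y, lam1, lam2, n_iter=1000):
        # Generic proximal-gradient sketch (NOT the paper's exact scheme) for
        #   min_b  (1/(2n)) ||y - X b||^2  +  lam1 ||b||_1  +  (lam2/2) ||b||^2.
        n, p = X.shape
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam2)  # 1 / Lipschitz constant of the smooth part
        b = np.zeros(p)
        for _ in range(n_iter):
            grad = X.T @ (X @ b - y) / n + lam2 * b          # gradient of the smooth part
            b = soft_threshold(b - step * grad, step * lam1)
        return b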


Similar articles

Exploiting Covariate Similarity in Sparse Regression via the Pairwise Elastic Net

A new approach to regression regularization called the Pairwise Elastic Net is proposed. Like the Elastic Net, it simultaneously performs automatic variable selection and continuous shrinkage. In addition, the Pairwise Elastic Net encourages the grouping of strongly correlated predictors based on a pairwise similarity measure. We give examples of how the approach can be used to achieve the obje...
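As an illustration of the general shape such a penalty takes (reconstructed here from the literature as an assumption, not quoted from this truncated abstract), the pairwise penalty can be written as a quadratic form in the absolute coefficients:

    \Omega_{\mathrm{PEN}}(\beta) = |\beta|^{\top} P \, |\beta|, \qquad P \succeq 0,

where P is built from a pairwise similarity measure on the columns of the design matrix. The choice P = I gives the ridge penalty \|\beta\|_2^2 and P = \mathbf{1}\mathbf{1}^{\top} gives \|\beta\|_1^2, so similarity-driven choices of P interpolate between the two and make it cheaper for strongly correlated predictors to enter the model together.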


Regularization and variable selection via the elastic net

We propose the elastic net, a new regularization and variable selection method. Real-world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly...
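For reference, the (naive) elastic-net criterion combines the lasso and ridge penalties:

    \hat{\beta} = \arg\min_{\beta} \; \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,

with \lambda_2 = 0 recovering the lasso and \lambda_1 = 0 ridge regression. In scikit-learn this corresponds to ElasticNet(alpha, l1_ratio), where l1_ratio sets the relative weight of the two penalties.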


Priors on the Variance in Sparse Bayesian Learning; the demi-Bayesian Lasso

We explore the use of proper priors for variance parameters of certain sparse Bayesian regression models. This leads to a connection between sparse Bayesian learning (SBL) models (Tipping, 2001) and the recently proposed Bayesian Lasso (Park and Casella, 2008). We outline simple modifications of existing algorithms to solve this new variant, which essentially uses type-II maximum likelihood to f...
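The hierarchy this abstract alludes to is, in the standard Park and Casella formulation (reproduced from the literature rather than from this paper's truncated text):

    \beta_j \mid \sigma^2, \tau_j^2 \sim \mathcal{N}(0,\, \sigma^2 \tau_j^2), \qquad \tau_j^2 \overset{\mathrm{iid}}{\sim} \mathrm{Exp}(\lambda^2/2).

Integrating out \tau_j^2 yields the Laplace prior p(\beta_j \mid \sigma^2) \propto \exp(-\lambda |\beta_j| / \sigma), whose posterior mode is exactly the lasso estimate; placing proper priors on the variance parameters \tau_j^2 modifies that hierarchy.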


Making the Most of Bag of Words: Sentence Regularization with Alternating Direction Method of Multipliers

In many high-dimensional learning problems, only some parts of an observation are important to the prediction task; for example, the cues to correctly categorizing a document may lie in a handful of its sentences. We introduce a learning algorithm that exploits this intuition by encoding it in a regularizer. Specifically, we apply the sparse overlapping group lasso with one group for every bund...
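As a concrete illustration of the machinery the abstract names, here is a minimal ADMM sketch for a plain (non-overlapping) group lasso; the overlapping groups used in the paper additionally require duplicating the shared coefficients, which is omitted. Function names and defaults are illustrative assumptions.

    import numpy as np

    def group_soft_threshold(v, t):
        # Prox of t * ||.||_2 on one group: shrinks the whole block,
        # zeroing it when its norm is at most t.
        nv = np.linalg.norm(v)
        return np.zeros_like(v) if nv <= t else (1.0 - t / nv) * v

    def admm_group_lasso(X, y, groups, lam, rho=1.0, n_iter=200):
        # ADMM for  min_b 0.5 ||y - X b||^2 + lam * sum_g ||b_g||_2,
        # with `groups` a list of index arrays partitioning 0..p-1 in order.
        n, p = X.shape
        b, z, u = np.zeros(p), np.zeros(p), np.zeros(p)
        A = np.linalg.inv(X.T @ X + rho * np.eye(p))  # cached for the b-update
        Xty = X.T @ y
        for _ in range(n_iter):
            b = A @ (Xty + rho * (z - u))                          # quadratic subproblem
            z = np.concatenate([group_soft_threshold((b + u)[g], lam / rho)
                                for g in groups])                  # blockwise shrinkage
            u = u + b - z                                          # dual update
        return z

For contiguous blocks of three coefficients, for example, groups would be [np.arange(0, 3), np.arange(3, 6), ...].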


Elastic Net for Regression with Optimal Scaling Transformations

Regularized regression methods for linear regression have been developed over the last few decades to overcome the flaws of ordinary least squares regression with regard to prediction accuracy. In this chapter, three of these methods (Ridge regression, the Lasso, and the Elastic Net) are incorporated into CATREG, an optimal scaling method for both linear and nonlinear transformation of variables in ...




Journal:
  • J. Complexity

Volume 27, Issue -

Pages -

Publication date 2011